Competitive Function Approximation for Reinforcement Learning (IRI Technical Report)

Authors

  • Alejandro Agostini
  • Enric Celaya
Abstract

The application of reinforcement learning to problems with continuous domains requires representing the value function by means of function approximation. We identify two aspects of reinforcement learning that make the function approximation process hard: non-stationarity of the target function and biased sampling. Non-stationarity results from the bootstrapping nature of dynamic programming, in which the value function is estimated using its own current approximation. Biased sampling occurs when some regions of the state space are visited too often, causing repeated updates with similar values that wash out the occasional updates of infrequently sampled regions. We propose a competitive approach to function approximation in which many different local approximators are available at a given input, and the one expected to give the best approximation is selected by means of a relevance function. The local nature of the approximators allows fast adaptation to non-stationary changes and mitigates the biased sampling problem. The coexistence of multiple approximators, updated and tried in parallel, permits obtaining a good estimate much faster than would be possible with a single approximator. Experiments on different benchmark problems show that the competitive strategy provides faster and more stable learning than non-competitive approaches.

Institut de Robòtica i Informàtica Industrial (IRI)
Consejo Superior de Investigaciones Científicas (CSIC)
Universitat Politècnica de Catalunya (UPC)
Llorens i Artigas 4-6, 08028 Barcelona, Spain
Tel (fax): +34 93 401 5750 (5751)
http://www.iri.upc.edu
Corresponding author: A. Agostini, email: [email protected]
Bernstein Center for Computational Neuroscience, 37077 Goettingen, Germany
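To make the competitive scheme described in the abstract concrete, here is a minimal illustrative sketch in Python. The specific form of the local models (linear, with Gaussian receptive fields) and of the relevance function (locality weighted by accumulated experience) are assumptions made for illustration, not the exact construction used in the report.

```python
import numpy as np

class LocalModel:
    """A local linear approximator with a Gaussian receptive field.
    Both choices are illustrative assumptions, not the report's method."""

    def __init__(self, center, width):
        self.center = np.asarray(center, dtype=float)
        self.width = float(width)                 # receptive-field radius
        self.w = np.zeros(self.center.size + 1)   # linear weights + bias
        self.n_updates = 0                        # accumulated experience

    def activation(self, x):
        d2 = np.sum((np.asarray(x, dtype=float) - self.center) ** 2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return self.w @ np.append(x, 1.0)

    def relevance(self, x):
        # Hypothetical relevance: locality weighted by experience, so that
        # nearby, better-trained models win the competition at this input.
        return self.activation(x) * (1.0 + self.n_updates) / (2.0 + self.n_updates)

    def update(self, x, target, lr=0.1):
        phi = np.append(x, 1.0)
        self.w += lr * (target - self.w @ phi) * phi   # local gradient step
        self.n_updates += 1


class CompetitiveApproximator:
    """Many overlapping local models compete at each query point; the most
    relevant one both provides the prediction and receives the update, which
    keeps changes local and limits interference from biased sampling."""

    def __init__(self, centers, width):
        self.models = [LocalModel(c, width) for c in centers]

    def _winner(self, x):
        return max(self.models, key=lambda m: m.relevance(x))

    def predict(self, x):
        return self._winner(x).predict(x)

    def update(self, x, target):
        self._winner(x).update(x, target)


# Usage: approximate a value function over [0, 1] with 11 overlapping models.
approx = CompetitiveApproximator(centers=[[c] for c in np.linspace(0, 1, 11)],
                                 width=0.15)
for _ in range(2000):
    x = np.random.rand(1)
    approx.update(x, target=np.sin(3.0 * x[0]))   # stand-in for a TD target
```

Because only the winning model moves at each step, a burst of visits to one region leaves the weights of models covering other regions untouched, which is the intuition behind the paper's claim about biased sampling.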


Related articles

Probability Density Estimation of the Q Function for Reinforcement Learning (IRI Technical Report)

Performing Q-learning in continuous state-action spaces remains an unsolved problem for many complex applications. The Q function may be rather complex and cannot be expected to fit a predefined parametric model. In addition, the function approximation must be able to cope with the high non-stationarity of the estimated Q-values, the on-line nature of the learning with a strongly biased s...

Full text

Decision Tree Function Approximation in Reinforcement Learning (Computer Science Technical Report)

We present a decision-tree-based approach to function approximation in reinforcement learning. We compare our approach with table lookup and a neural network function approximator on three problems: the well-known mountain car and pole balance problems, as well as a simulated automobile race car. We find that the decision tree can provide better learning performance than the neural network funct...

Full text

Value Function Approximation in Reinforcement Learning Using the Fourier Basis

We describe the Fourier basis, a linear value function approximation scheme based on the Fourier series. We empirically demonstrate that it performs well compared to radial basis functions and the polynomial basis, the two most popular fixed bases for linear value function approximation, and is competitive with learned proto-value functions.

Full text
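For reference, the Fourier basis described in the entry above has a compact closed form. A minimal sketch, assuming the state has already been scaled to the unit hypercube [0, 1]^d as in Konidaris et al.'s construction:

```python
import numpy as np
from itertools import product

def fourier_features(x, order):
    """Order-n Fourier basis for a state x scaled to [0, 1]^d: one cosine
    feature per integer coefficient vector c in {0, ..., order}^d, i.e.
    phi_c(x) = cos(pi * c . x)."""
    x = np.asarray(x, dtype=float)
    coeffs = np.array(list(product(range(order + 1), repeat=x.size)))
    return np.cos(np.pi * coeffs @ x)

# A linear value estimate is then V(x) = w . phi(x).
phi = fourier_features([0.3, 0.7], order=3)   # (3+1)^2 = 16 features for d=2
w = np.zeros(phi.size)
v = w @ phi
```

The feature count grows as (order+1)^d, which is why the full basis is typically used only for low-dimensional state spaces.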

An Empirical Investigation into Function Approximation with Reinforcement Learning

In the reinforcement learning framework, standard table-based look-up methods for value functions converge to the optimal solution, yet these methods are intractable for complex real-world control problems. A common approach to overcoming this problem is the use of so-called function approximation techniques that generalise over their input spaces. In this paper we study the capabilities of ...

Full text

Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning

In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for...

Full text
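The sigmoid-weighted linear unit named in the title of the entry above, together with its derivative (which that line of work also evaluates as an activation function), can be written in a few lines; a minimal sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    """Sigmoid-weighted linear unit: the input scaled by its own sigmoid."""
    return x * sigmoid(x)

def dsilu(x):
    """Derivative of SiLU, d/dx [x * sigmoid(x)], usable as an activation."""
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))
```

Unlike ReLU, SiLU is smooth and non-monotonic, dipping slightly below zero for negative inputs before saturating.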


Journal title:

Volume   Issue

Pages  -

Publication date: 2014